



A Proof for Claim

Neural Information Processing Systems

The imbalance ratios of CIFAR-10-LT, CIFAR-100-LT, ImageNet-100-LT, and Places-LT are 5, 80, 50, and 182, respectively. Our default training set for each dataset is summarized in Table 8.
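The imbalance ratio of a long-tailed dataset is commonly defined as the size of the largest class divided by the size of the smallest. A minimal sketch of that computation (the function name and toy labels are ours, not from the paper):

```python
import numpy as np

def imbalance_ratio(labels):
    """Imbalance ratio: count of the most frequent class
    divided by the count of the least frequent class."""
    _, counts = np.unique(np.asarray(labels), return_counts=True)
    return counts.max() / counts.min()

# A toy long-tailed label set: class 0 has 10 samples, class 1 has 2.
labels = [0] * 10 + [1] * 2
print(imbalance_ratio(labels))  # -> 5.0
```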



DRAUC: An Instance-wise Distributionally Robust AUC Optimization Framework

Neural Information Processing Systems

Distributionally Robust Optimization (DRO) enhances model performance by optimizing it for the local worst-case scenario, but directly integrating AUC optimization with DRO results in an intractable optimization problem.
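As background for why AUC optimization is awkward to combine with DRO: empirical AUC is a non-differentiable count over positive-negative score pairs, so it is usually replaced by a pairwise differentiable surrogate. A minimal numpy sketch of both quantities (function names and toy scores are ours, not from the paper):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of positive-negative pairs ranked
    correctly, counting ties as half-correct."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return ((diff > 0) + 0.5 * (diff == 0)).mean()

def pairwise_squared_hinge(scores_pos, scores_neg, margin=1.0):
    """A common differentiable surrogate for 1 - AUC: penalize any
    positive-negative pair whose score gap is below the margin."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return np.mean(np.maximum(0.0, margin - diff) ** 2)

pos = np.array([2.0, 1.5])  # scores of positive samples
neg = np.array([0.0, 1.8])  # scores of negative samples
print(auc(pos, neg))  # 3 of 4 pairs correctly ordered -> 0.75
```

A DRO variant would then take a worst case of such a loss over a neighborhood of the empirical distribution, which is where the tractability problem the abstract mentions arises.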

Co-Modality Graph Contrastive Learning for Imbalanced Node Classification - Appendix

Neural Information Processing Systems

In CM-GCL, we can either take the text feature x_T or the image feature x_I as the content feature, and consider the corresponding text encoder f_T or image encoder f_I as the content encoder. In this section, we discuss the settings of baseline models for imbalanced node classification over four graphs. G1: We convert the rich text content into bag-of-words feature vectors, and further feed the feature vectors with different imbalance ratios to a two-layer MLP [7] classifier to get the classification results. For the AMiner, YelpChi, and GitHub graph datasets, we implement chi-square [11] feature selection to select useful feature words. G2: We implement three graph neural network based representation learning models, including GCN [5], GAT [9], and GraphSAGE [2], to learn the node embeddings by leveraging both node features (bag-of-words feature vectors) and graph structure information.
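The G1 baseline pipeline (bag-of-words features fed to a two-layer MLP) can be sketched as follows. This is a minimal illustration in numpy with untrained random weights, not the authors' implementation; the function names, vocabulary, and toy documents are ours:

```python
import numpy as np

def bag_of_words(docs, vocab):
    """Count-based bag-of-words features, one row per document."""
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(docs), len(vocab)))
    for r, doc in enumerate(docs):
        for w in doc.split():
            if w in index:
                X[r, index[w]] += 1
    return X

def mlp_forward(X, W1, b1, W2, b2):
    """Two-layer MLP: ReLU hidden layer, softmax class probabilities."""
    h = np.maximum(0.0, X @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
vocab = ["good", "bad", "service"]
X = bag_of_words(["good service", "bad bad service"], vocab)
probs = mlp_forward(X, rng.normal(size=(3, 4)), np.zeros(4),
                    rng.normal(size=(4, 2)), np.zeros(2))
print(probs.shape)  # (2, 2): one class distribution per document
```

In the paper's setup the input would instead be the real node features, optionally reduced with chi-square feature selection, and the weights would be trained on the imbalanced labels.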